
    Diagnostics of Dementia from Structural and Functional Markers of Brain Atrophy with Machine Learning

    Dementia is a condition in which higher mental functions are disrupted. It currently affects an estimated 57 million people worldwide. Diagnosing dementia is difficult because neither anatomical indicators nor functional testing is currently sufficiently sensitive or specific. A long list of outstanding issues must still be addressed. First, multimodal diagnosis has yet to be introduced into the early stages of dementia screening. Second, there is no accurate instrument for predicting the progression of pre-dementia. Third, non-invasive testing cannot be used to provide differential diagnoses. By creating ML models of normal and accelerated brain aging, we intend to better understand brain development. The main objective of our study is to improve the diagnostics of accelerated decline through the combined analysis of distinct imaging and functional modalities with advanced data science techniques. Our hypothesis is that the association between brain structural changes and cognitive performance differs between normal and accelerated aging. We propose using brain MRI scans to estimate the cognitive status of a cognitively preserved examinee and to develop a structure-function model with machine learning (ML). Accelerated aging is suspected when a scanned individual's findings do not align with the usual paradigm. We calculate the deviation from the model of normal aging (DMNA) as the error of cognitive score prediction. The prediction can then be compared with the results of the administered cognitive tests. The greater the difference between the expected and observed values, the greater the risk of dementia. DMNA can distinguish between cognitively normal individuals and patients with mild cognitive impairment (MCI). The model also performed well in MCI-versus-Alzheimer's disease (AD) categorization. DMNA is a potential diagnostic marker of dementia and its types.
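
    A minimal sketch of the DMNA computation described above, assuming scikit-learn and synthetic stand-in data (the features, model choice, and flagging threshold are illustrative, not the paper's exact pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: rows are examinees, columns are MRI-derived
# structural features (e.g., regional volumes); y is a cognitive score.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=500)

# Fit the structure-function model on cognitively preserved examinees.
X_norm, X_test, y_norm, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_norm, y_norm)

# DMNA: the error between the MRI-predicted and the observed cognitive
# score, i.e. the deviation from the model of normal aging.
dmna = np.abs(model.predict(X_test) - y_test)

# A large deviation suggests the examinee does not follow the usual
# structure-function paradigm (hypothetical 95th-percentile cut-off).
suspected = dmna > np.percentile(dmna, 95)
print(f"Flagged {suspected.sum()} of {len(dmna)} examinees")
```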

    Multimodal diagnostics in multiple sclerosis: predicting disability and conversion from relapsing-remitting to secondary progressive disease course - protocol for systematic review and meta-analysis

    Background: The number of patients diagnosed with multiple sclerosis (MS) has increased significantly over the last decade. The challenge is to identify the transition from relapsing-remitting to secondary progressive MS. Since the available methods for examining patients with MS are limited, both the diagnosis and the prognostication of disease progression would benefit from a multimodal approach. The latter combines the evidence obtained from disparate radiologic modalities, neurophysiological evaluation, cognitive assessment and molecular diagnostics. In this systematic review we will analyse the advantages of multimodal studies in predicting the risk of conversion to secondary progressive MS. Methods and analysis: We will use peer-reviewed publications available in the Web of Science, Medline/PubMed, Scopus, Embase and CINAHL databases. In vivo studies reporting the predictive value of diagnostic methods will be considered. Selected publications will be processed through Covidence software for automatic deduplication and blind screening. Two reviewers will use a predefined template to extract the data from eligible studies. We will analyse the performance metrics (1) for the classification models reflecting the risk of secondary progression: sensitivity, specificity, accuracy, area under the receiver operating characteristic curve, positive and negative predictive values; and (2) for the regression models forecasting disability scores: the ratio of mean absolute error to the range of values. Then, we will create ranking charts representing the performance of the algorithms for calculating disability level and MS progression. Finally, we will compare the predictive power of radiological and radiomic correlates of clinical disability and cognitive impairment in patients with MS. Ethics and dissemination: The study does not require ethical approval because we will analyse publicly available literature. The project results will be published in a peer-reviewed journal and presented at scientific conferences. PROSPERO registration number CRD42022354179.
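
    For the regression models in the planned analysis, the headline metric is the ratio of mean absolute error to the range of values. A minimal sketch of that computation, with hypothetical EDSS-like disability scores standing in for real study data:

```python
import numpy as np

def normalized_mae(y_true, y_pred):
    """Ratio of mean absolute error to the range of observed values,
    the regression metric the review plans to compare across studies."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mae = np.mean(np.abs(y_true - y_pred))
    value_range = y_true.max() - y_true.min()
    return mae / value_range

# Illustrative EDSS-like disability scores (0-10 scale, hypothetical).
observed  = [1.0, 2.5, 4.0, 6.5, 8.0]
predicted = [1.5, 2.0, 4.5, 6.0, 7.5]
print(f"MAE / range = {normalized_mae(observed, predicted):.3f}")
```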

    Deep Learning-Based Automatic Assessment of Lung Impairment in COVID-19 Pneumonia: Predicting Markers of Hypoxia With Computer Vision

    Background: Hypoxia is a potentially life-threatening condition that can be seen in pneumonia patients. Objective: We aimed to develop and test an automatic assessment of lung impairment in COVID-19 associated pneumonia with machine learning regression models that predict markers of respiratory and cardiovascular functioning from radiograms and lung CT. Materials and Methods: We enrolled a total of 605 COVID-19 cases admitted to Al Ain Hospital from 24 February to 1 July 2020 into the study. The inclusion criteria were as follows: age ≥ 18 years; inpatient admission; PCR positive for SARS-CoV-2; lung CT available in the PACS. We designed a CNN-based regression model to predict systemic oxygenation markers from lung CT and 2D diagnostic images of the chest. The 2D images, generated by averaging CT scans, were analogous to the frontal and lateral view radiograms. The functional (heart and breath rate, blood pressure) and biochemical findings (SpO2, HCO3-, K+, Na+, anion gap, C-reactive protein) served as ground truth. Results: Radiologic findings in the lungs of COVID-19 patients provide reliable assessments of functional status with clinical utility. When fed to ML models, the sagittal view radiograms reflect dyspnea more accurately than the coronal view radiograms due to their smaller size and the lower model complexity. The mean absolute error of the models trained on single-projection radiograms was approximately 11-12%, and it dropped by 0.5-1% when both projections were used (11.97 ± 9.23 vs. 11.43 ± 7.51%; p = 0.70). Thus, the ML regression models based on 2D images acquired in multiple planes had slightly better performance. The data blending approach was as efficient as the voting regression technique: 10.90 ± 6.72 vs. 11.96 ± 8.30%, p = 0.94. The models trained on 3D images were more accurate than those trained on 2D images: 8.27 ± 4.13 vs. 11.75 ± 8.26%, p = 0.14 before lung extraction; 7.94 ± 4.13 vs. 10.66 ± 5.83%, p = 0.18 after extraction. Lung extraction boosted 3D model performance only marginally (from 8.27 ± 4.13 to 7.94 ± 4.13%; p = 0.82). However, none of the differences between 3D and 2D models were statistically significant. Conclusion: The constructed ML algorithms can serve as models of structure-function association and pathophysiologic changes in COVID-19. The algorithms can improve risk evaluation and disease management, especially after oxygen therapy, which changes functional findings. Thus, the structural assessment of acute lung injury reflects disease severity.
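
    A minimal sketch of the 2D-projection idea and the regression setup, assuming a TensorFlow/Keras environment and a random array standing in for a real CT volume; the architecture is illustrative, not the authors' exact network:

```python
import numpy as np
import tensorflow as tf  # assumption: TensorFlow/Keras is available

# Hypothetical CT volume (z, y, x); real scans would come from DICOM/PACS.
ct = np.random.rand(64, 128, 128).astype("float32")

# 2D views generated by averaging the 3D scan, analogous to the frontal
# and lateral radiograms described in the abstract.
frontal = ct.mean(axis=1)  # collapse the anterior-posterior axis
lateral = ct.mean(axis=2)  # collapse the left-right axis

# Minimal CNN regression head predicting one oxygenation marker (e.g.,
# SpO2) from a single-projection image.
model = tf.keras.Sequential([
    tf.keras.Input(shape=frontal.shape + (1,)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1),  # regression output
])
model.compile(optimizer="adam", loss="mae")  # MAE matches the reported metric
pred = model(frontal[None, ..., None])       # shape (1, 1)
```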

    Prediction of COVID-19 severity using laboratory findings on admission: informative values, thresholds, ML model performance

    Background: Despite the necessity, there is no reliable biomarker to predict disease severity and prognosis in patients with COVID-19. The currently published prediction models are not fully applicable to clinical use. Objectives: To identify predictive biomarkers of COVID-19 severity and to justify their threshold values for the stratification of the risk of deterioration that would require transfer to the intensive care unit (ICU). Methods: The study cohort (560 subjects) included all consecutive patients admitted to Dubai Mediclinic Parkview Hospital from February to May 2020 with COVID-19 confirmed by PCR. The challenge in finding the cut-off thresholds was the unbalanced dataset (e.g., 72 patients admitted to the ICU vs 488 non-severe cases). Therefore, we customised a supervised machine learning (ML) algorithm in terms of the threshold value used to predict worsening. Results: With the default thresholds returned by the ML estimator, the performance of the models was low. It was improved by setting the cut-off level to the 25th percentile for lymphocyte count and the 75th percentile for other features. The study justified the following threshold values for the laboratory tests done on admission: lymphocyte count <2.59×109/L, and the upper levels for total bilirubin 11.9 μmol/L, alanine aminotransferase 43 U/L, aspartate aminotransferase 32 U/L, D-dimer 0.7 mg/L, activated partial thromboplastin time (aPTT) 39.9 s, creatine kinase 247 U/L, C reactive protein (CRP) 14.3 mg/L, lactate dehydrogenase 246 U/L, troponin 0.037 ng/mL, ferritin 498 ng/mL and fibrinogen 446 mg/dL. Conclusion: The performance of the neural network trained with the most informative tests (aPTT, CRP and fibrinogen) is acceptable (area under the curve (AUC) 0.86; 95% CI 0.486 to 0.884; p<0.001) and comparable with the model trained with all the tests (AUC 0.90; 95% CI 0.812 to 0.902; p<0.001). A free online tool at https://med-predict.com illustrates the study results.
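
    A minimal sketch of the percentile-based thresholding described above, with synthetic admission labs standing in for the hospital data; the feature set and the any-test-crosses-its-cut-off rule are illustrative simplifications:

```python
import numpy as np
import pandas as pd

# Hypothetical admission labs for 560 patients (synthetic distributions).
rng = np.random.default_rng(42)
labs = pd.DataFrame({
    "lymphocytes": rng.gamma(2.0, 1.0, 560),  # x10^9/L
    "crp":         rng.gamma(2.0, 7.0, 560),  # mg/L
    "d_dimer":     rng.gamma(1.5, 0.4, 560),  # mg/L
})

# Percentile-based cut-offs, as justified in the study: the 25th
# percentile for lymphocyte count (low values are risky) and the
# 75th percentile for the other features (high values are risky).
cutoffs = {
    "lymphocytes": np.percentile(labs["lymphocytes"], 25),
    "crp":         np.percentile(labs["crp"], 75),
    "d_dimer":     np.percentile(labs["d_dimer"], 75),
}

# Flag a patient as at risk of ICU transfer if any test crosses its cut-off.
def at_risk(row):
    return (row["lymphocytes"] < cutoffs["lymphocytes"]
            or row["crp"] > cutoffs["crp"]
            or row["d_dimer"] > cutoffs["d_dimer"])

labs["risk_flag"] = labs.apply(at_risk, axis=1)
print(f"Flagged fraction: {labs['risk_flag'].mean():.2f}")
```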

    An improvised CNN model for fake image detection

    The last decade has witnessed a multifold growth of image data, courtesy of the emergence of social networking services such as Facebook, Instagram and LinkedIn. A major menace facing today's world is doctored images, in which photographs are altered through techniques such as splicing, copy-move and removal to change their meaning; this demands serious mitigation mechanisms. Viewed through the lens of artificial intelligence, the problem is one of binary classification: distinguishing original images from manipulated ones. This research work proposes a computer vision model based on convolutional neural networks (CNNs) for fake image detection. A comparative analysis of six popular traditional machine learning models and six different CNN architectures was performed to select the best model for further experimentation. The proposed model, based on ResNet50 and combined with powerful preprocessing techniques, yields a fake image detector with a total accuracy of 0.99, an improvement of around 18% over the other models.
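
    A minimal transfer-learning sketch of a ResNet50-based fake image detector, assuming TensorFlow/Keras; the preprocessing and classification head are illustrative, not the paper's exact configuration:

```python
import tensorflow as tf

# Pretrained ResNet50 backbone without the ImageNet classification top.
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone for initial training

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # real vs. fake

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```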

    Customized Rule-Based Model to Identify At-Risk Students and Propose Rational Remedial Actions

    Detecting at-risk students provides advanced benefits for improving student retention rates, effective enrollment management, alumni engagement, targeted marketing, and institutional effectiveness. One of the success factors of educational institutions is the accurate and timely identification and prioritization of students requiring assistance. The main objective of this paper is to detect at-risk students as early as possible in order to take appropriate corrective measures, taking into consideration the most important and influential attributes in students' data. This paper emphasizes the use of a customized rule-based system (RBS) to identify and visualize at-risk students at early stages throughout course delivery using the Risk Flag (RF). Moreover, it can serve as a warning tool for instructors to identify students who may struggle to grasp learning outcomes. The module provides the instructor with a dashboard that graphically depicts the students' performance in different coursework components. An at-risk student is distinguished (flagged), and remedial actions are communicated to the student, instructor, and stakeholders. The system suggests remedial actions based on the severity of the case and the time at which the student is flagged. It is expected to improve students' achievement and success, and it could also have positive impacts on under-performing students, educators, and academic institutions in general.
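
    A minimal sketch of the Risk Flag idea as a rule-based check over one student's coursework record; the rules, thresholds, and remedial actions are hypothetical examples, not the paper's actual rule set:

```python
def risk_flag(student):
    """Return (flagged, remedial_actions) for one coursework record."""
    actions = []
    if student["attendance"] < 0.75:
        actions.append("Advise on attendance policy")
    if student["quiz_avg"] < 0.50:
        actions.append("Refer to tutoring for quizzes")
    if student["assignments_submitted"] < student["assignments_due"]:
        actions.append("Remind about missing assignments")
    # Severity grows with the number of triggered rules; a real system
    # would also weigh how early in the course the student is flagged.
    return len(actions) > 0, actions

student = {"attendance": 0.60, "quiz_avg": 0.45,
           "assignments_submitted": 2, "assignments_due": 4}
flagged, actions = risk_flag(student)
print(flagged, actions)
```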

    AI applications in robotics, diagnostic image analysis and precision medicine: Current limitations, future trends, guidelines on CAD systems for medicine

    Background: AI in medicine has been recognized by both academia and industry as revolutionizing how healthcare services will be offered by providers and perceived by all stakeholders. Objectives: We aim to review recent tendencies in building AI applications for medicine and to foster further development by outlining obstacles. Sub-objectives: (1) to highlight AI techniques that we have identified as key areas of AI-related research in healthcare; (2) to offer guidelines on building reliable AI-based CAD systems for medicine; and (3) to reveal open research questions, challenges, and directions for future research. Methods: To address these tasks, we performed a systematic review of the references on the main branches of AI applications for medical purposes. We focused primarily on the limitations of the reviewed studies. Conclusions: This study provides a summary of AI-related research in healthcare, discusses the challenges, and proposes open research questions for further research. Robotics has taken huge leaps in improving healthcare services in a variety of medical sectors, including oncology and surgical interventions. In addition, robots are now replacing human assistants as they learn to become more sociable and reliable. However, challenges must still be addressed to enable the use of medical robots in diagnostics and interventions. AI for medical imaging eliminates subjectivity in visual diagnostic procedures and allows medical imaging to be combined with clinical data, lifestyle risks, and demographics. Disadvantages of AI solutions for radiology include both a lack of transparency and a restriction to narrow diagnostic questions. Designing an optimal automatic classifier should incorporate both expert knowledge of a disease and state-of-the-art computer vision techniques. AI in precision medicine and oncology allows for risk stratification based on genomic aberrations discovered through molecular testing. To summarize, AI cannot substitute for a medical doctor. However, medicine may benefit from robotics, CAD, and an AI-based personalized approach.

    Automatic Detection and Classification of Epileptic Seizures from EEG Data: Finding Optimal Acquisition Settings and Testing Interpretable Machine Learning Approach

    Deep learning (DL) is emerging as a successful technique for the automatic detection and differentiation of spontaneous seizures that may otherwise be missed or misclassified. Herein, we propose a system architecture based on top-performing DL models for binary and multigroup classifications with the non-overlapping window technique, which we tested on the TUSZ dataset. The system accurately detects seizure episodes (87.7% Sn, 91.16% Sp) and carefully distinguishes eight seizure types (95–100% Acc). An increase in the EEG sampling rate from 50 to 250 Hz boosted model performance: the precision of seizure detection rose by 5%, and that of seizure differentiation by 7%. Nevertheless, a low sampling rate remains a reasonable option for training reliable models with EEG data. Decreasing the number of EEG electrodes from 21 to 8 did not affect seizure detection but significantly worsened seizure differentiation: 98.24 ± 0.17 vs. 85.14 ± 3.14% recall. In detecting epileptic episodes, all electrodes provided equally informative input, but in seizure differentiation their informative value varied. We improved model explainability with interpretable ML. Activation maximization highlighted the presence of EEG patterns specific to the eight seizure types. Cortical projection of epileptic sources depicted differences between generalized and focal seizures. Interpretable ML techniques confirmed that our system recognizes biologically meaningful features as indicators of epileptic activity in EEG.
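
    A minimal sketch of the non-overlapping window technique used to segment EEG records into classifier inputs, with a random array standing in for real TUSZ data:

```python
import numpy as np

def nonoverlapping_windows(eeg, fs, win_sec):
    """Split a (channels, samples) EEG record into non-overlapping
    windows of win_sec seconds each."""
    win = int(fs * win_sec)
    n = eeg.shape[1] // win  # any trailing partial window is dropped
    # Result shape: (n_windows, channels, samples_per_window).
    return eeg[:, :n * win].reshape(eeg.shape[0], n, win).transpose(1, 0, 2)

# Hypothetical record: 21 electrodes, 60 s at 250 Hz (one of the
# sampling rates compared in the study).
eeg = np.random.randn(21, 250 * 60)
windows = nonoverlapping_windows(eeg, fs=250, win_sec=5)
print(windows.shape)  # (12, 21, 1250)
```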